out-of-distribution sample
POODLE: Improving Few-shot Learning via Penalizing Out-of-Distribution Samples
In this work, we propose to use out-of-distribution samples, i.e., unlabeled samples coming from outside the target classes, to improve few-shot learning. Specifically, we exploit the easily available out-of-distribution samples to drive the classifier to avoid irrelevant features by maximizing the distance from prototypes to out-of-distribution samples while minimizing that of in-distribution samples (i.e., support, query data). Our approach is simple to implement, agnostic to feature extractors, lightweight without any additional cost for pre-training, and applicable to both inductive and transductive settings. Extensive experiments on various standard benchmarks demonstrate that the proposed method consistently improves the performance of pretrained networks with different architectures.
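The objective described above can be sketched as a prototypical classifier with an extra penalty term. The function name, the weighting factor `alpha`, and the exact form of the push-away term are assumptions for illustration; the paper's actual formulation may differ.

```python
import numpy as np

def poodle_loss(prototypes, in_feats, in_labels, ood_feats, alpha=0.5):
    """Sketch of a POODLE-style objective: pull support/query features
    toward their class prototypes while pushing out-of-distribution
    features away from every prototype. `alpha` (assumed name) trades
    off the two terms."""
    # squared Euclidean distances from each sample to each prototype: (n, k)
    def dists(x):
        return ((x[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)

    # in-distribution term: cross-entropy over negative distances,
    # i.e. a standard prototypical-network classifier
    logits = -dists(in_feats)
    m = logits.max(axis=1, keepdims=True)
    log_z = np.log(np.exp(logits - m).sum(axis=1)) + m[:, 0]
    ce = (log_z - logits[np.arange(len(in_labels)), in_labels]).mean()

    # OOD term: maximize distance from prototypes to OOD samples,
    # implemented as minimizing the negated mean distance
    push = -dists(ood_feats).mean()
    return ce + alpha * push
```

Minimizing this loss simultaneously tightens in-distribution samples around their prototypes and drives OOD samples outward, which is the mechanism the abstract describes.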
mhealth_ood_neurips_2021.pdf
In this section, we provide screenshots and a list of the examples used in the user study. Note that the name of the institution is redacted for review. The screenshots show an example interface for a skin cancer classifier. Figure 4: Interface to display different input data types. Figure 5: List of input examples used in the user study.
Enhanced Generative Model Evaluation with Clipped Density and Coverage
Salvy, Nicolas, Talbot, Hugues, Thirion, Bertrand
Although generative models have made remarkable progress in recent years, their use in critical applications has been hindered by their incapacity to reliably evaluate sample quality. Quality refers to at least two complementary concepts: fidelity and coverage. Current quality metrics often lack reliable, interpretable values due to an absence of calibration or insufficient robustness to outliers. To address these shortcomings, we introduce two novel metrics, Clipped Density and Clipped Coverage. By clipping individual sample contributions and, for fidelity, the radii of nearest neighbor balls, our metrics prevent out-of-distribution samples from biasing the aggregated values. Through analytical and empirical calibration, these metrics exhibit linear score degradation as the proportion of poor samples increases. Thus, they can be straightforwardly interpreted as equivalent proportions of good samples. Extensive experiments on synthetic and real-world datasets demonstrate that Clipped Density and Clipped Coverage outperform existing methods in terms of robustness, sensitivity, and interpretability for evaluating generative models.
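The clipping idea in the abstract can be sketched as follows. This is a hedged reconstruction, not the authors' implementation: the parameter names (`k`, `radius_clip_q`) and the exact clipping rules (capping per-sample contributions at 1, clipping nearest-neighbor ball radii at a quantile) are assumptions made for illustration.

```python
import numpy as np

def clipped_density_coverage(real, fake, k=3, radius_clip_q=0.9):
    """Sketch of clipped fidelity/coverage metrics: per-sample
    contributions are capped so a single out-of-distribution sample
    cannot inflate the aggregate, and nearest-neighbor ball radii are
    clipped at a quantile (assumed rule) to bound outlier influence."""
    def knn_radii(x, k):
        # pairwise distances within x; k-th smallest per row (0th is self)
        d = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(-1))
        return np.sort(d, axis=1)[:, k]

    r = knn_radii(real, k)
    r = np.minimum(r, np.quantile(r, radius_clip_q))  # clip ball radii

    # distance from each fake sample to each real sample: (n_fake, n_real)
    d_rf = np.sqrt(((fake[:, None, :] - real[None, :, :]) ** 2).sum(-1))
    inside = d_rf <= r[None, :]  # fake i lies inside real j's k-NN ball

    # density-style fidelity: each fake sample's contribution capped at 1
    density = np.minimum(inside.sum(axis=1) / k, 1.0).mean()
    # coverage: fraction of real balls containing at least one fake sample
    coverage = inside.any(axis=0).mean()
    return density, coverage
```

Without the caps, one fake sample sitting inside many overlapping real balls could push a density-style score arbitrarily high; clipping keeps both values in [0, 1] and robust to such outliers, matching the robustness goal stated in the abstract.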
Scanning Trojaned Models Using Out-of-Distribution Samples
Scanning for trojans (backdoors) in deep neural networks is crucial given their significant real-world applications. There has been an increasing focus on developing effective general trojan scanning methods across various trojan attacks. Despite advancements, there remains a shortage of methods that perform effectively without preconceived assumptions about the backdoor attack method. Additionally, we have observed that current methods struggle to identify classifiers trojaned using adversarial training. Motivated by these challenges, our study introduces a novel scanning method named TRODO (TROjan scanning by Detection of adversarial shifts in Out-of-distribution samples).